Achieving Goals Through Interaction With Sensors And Actuators
Abstract
In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Our contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. Our theory is based on the premise that proper application of knowledge increases the robustness of plan execution. We propose to produce the detailed plan of primitive actions and execute it by using primitive components that contain domain-specific knowledge and knowledge about the available sensors and actuators. These primitives perform signal and control processing and serve as an interface to high-level planning processes. In this work, importance is placed on determining what information is relevant to achieving the goal, as well as on determining the details necessary to utilize the sensors and actuators.

I. KNOWLEDGE AT EXECUTION TIME

To be useful in the real world, robots need to be able to move safely in unstructured environments and achieve their given tasks despite unexpected changes in their environment or failures of some of their sensors. The variability of the world makes it impractical to develop very detailed plans of action before execution, since the world might change before execution begins and thus invalidate the plan. In this paper we describe our theory for determining the sensor and actuator commands necessary to execute a given abstract-goal, and then executing them. The abstract-goal is a single, high-level goal of the form that could be produced by a classical planning system. Our theory is based on the premise that proper use of knowledge increases the robustness of plan execution without reducing the ability to react to the environment [1], [2], [3]. Usually, execution requires some amount of information from the external world.
If at planning time the robot had a perfect model of the world, the processing to execute the abstract-goal could be greatly reduced and could even be performed in its entirety at planning time. In practice, since the world is dynamic and unpredictable, a perfect model of the world will never exist. Attempting to determine, prior to execution, everything needed to achieve a goal across the domain of abstract-goals, external world states, and sensor availability would be an endless task. (This work was funded in part by the NSF under grants NSF/CCR-8715220 and NSF/CDA-9022509, and by the AT&T Foundation.) The central problem is defining how to transform the given abstract-goal into an explicit representation of what is necessary to achieve the goal. We call this transformation process Explication. Executing an abstract-goal is difficult because the abstract-goal implicitly represents a large collection of information and details. To be executed, a plan must explicitly specify the details of how to utilize the sensors and actuators. For the same abstract-goal, different environment situations may require different sensors, different programming and control of the sensors, and different strategies. Explication is also difficult because of the need to adapt to the environment as the command is being executed. Because of this dependency on the current situation, Explication must occur at execution time. In order to further understand the Explication process, we have broken it down into subprocesses. Explication for different abstract-goals is accomplished through different combinations of these subprocesses.
The Explication process consists of:
(a) determining the relevant information from the abstract-goal and the current world state as sensed by the robot's sensors;
(b) decomposing the abstract-goal into sub-goals;
(c) selecting the source of the relevant information;
(d) collecting the relevant information and executing primitive commands on sensors or actuators;
(e) detecting and resolving conflicts which occur between sub-goals;
(f) using the information collected in a feedback control loop to control the sensors and actuators;
(g) monitoring the relevant information to detect goal accomplishment and error conditions.

Though each of these subproblems is individually solvable, the real challenge is in combining them. For example, an abstract-goal for turning the robot towards a moving object could be defined as a feedback control loop with very little knowledge used within decomposing, determining, and selecting. In comparison, an abstract-goal for moving the robot through a doorway can contain detailed knowledge on decomposing, determining, detecting and resolving conflicts, etc. Explication knowledge decomposes it into sub-abstract-goals of finding the doorway and moving towards it, as well as feedback control mappings of sensor-sweeping an area, searching for the doorway until it is found, and progressively moving through the doorway until clearing it.

Two concepts are crucial for Explication. First is the notion of Relevant Information Need. Explication must deal with the question "what information is relevant, and thus needed, in order to execute this abstract-goal?" Explication views such information, and thus the need for it, abstractly. Implicit goal abstraction is replaced by explicit informational-need abstraction. To put it another way, the abstract goal "how to achieve X" is replaced by two things: (a) explicit knowledge on "how to achieve" using the sensors and actuators, and (b) an explicit representation of the needed information.
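One way to see how the subprocesses (a)–(g) interlock is as a single execution loop. The sketch below is illustrative only (the original system was written in C and Common Lisp, and all names here are hypothetical): it determines relevant information, collects it through a sensing function, monitors sub-goals for accomplishment, and issues feedback-controlled actuator commands.

```python
# Illustrative sketch of combining the Explication subprocesses (a)-(g)
# into one loop. Hypothetical names; not the authors' implementation.

def explicate(abstract_goal, sense, act, max_steps=100):
    """Execute an abstract-goal by interleaving the subprocesses."""
    relevant = abstract_goal["relevant"]           # (a) relevant information
    subgoals = list(abstract_goal["subgoals"])     # (b) decomposition
    for _ in range(max_steps):
        world = {k: sense(k) for k in relevant}    # (c)/(d) select source, collect
        subgoals = [g for g in subgoals
                    if not g["done"](world)]       # (g) monitor accomplishment
        if not subgoals:
            return True                            # all sub-goals achieved
        active = subgoals[0]                       # (e) trivial conflict policy:
                                                   #     one sub-goal at a time
        act(active["command"](world))              # (f) feedback control step
    return False                                   # error: step budget exhausted
```

For instance, a toy "move to position 3" goal would supply a `done` predicate `pos >= 3` and a `command` that always steps forward by one; the loop then senses, checks, and acts until the predicate holds.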
This representation uses abstraction to reduce dependency on the source of the information. Abstract informational needs can then be met by abstract information providers (i.e., sensors). We have utilized the concept of a "Logical Sensor" [4], [5] as such an abstract information provider. Equally crucial is the second concept, Utilization-Detail. Explication must deal with the question "what details of using sensors, actuators, and processes are necessary in order to execute this abstract-goal?" Again, abstraction is used to explicitly represent the need for detailed control of sensors and actuators, where control primitives are commands and their parameters for both discrete and continuous actions. We have extended the Logical Sensor concept to allow other logical entities, thus accounting for this explicit abstraction. Since the abstraction of a goal is hierarchical, Explication uses the above two concepts at each level of the hierarchy. On a single level of abstraction (i.e., for a single abstract-goal), Explication uses knowledge to explicitly represent the information and detail needs. These needs are met by Logical Sensors and other logical entities. These entities either map directly to the corresponding information/details, or they each apply Explication hierarchically; thus, more knowledge is used to explicitly determine further information and detail needs.

We propose a framework for Explication, called Logical Sensors/Actuators (LSA). The framework is being implemented in an object-oriented programming environment, resulting in a flexible robotic sensor/control system which we call the Logical Sensor Actuator Testbed (LSAT). The framework consists of the knowledge, mechanisms, and structures used by the Explication process. Primarily, the framework is a collection of reconfigurable components and flexible component structures which can be combined to achieve abstract-goals through execution.
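The Logical Sensor idea described above can be sketched as an interface that hides whether the information comes directly from a device or from a composition of other logical sensors. This is a minimal sketch under assumed names (the cited Logical Sensor work [4], [5] defines its own structures):

```python
# Hedged sketch of a "Logical Sensor": an abstract information provider.
# Class and method names are hypothetical illustrations.

class LogicalSensor:
    """Abstract provider of one kind of relevant information."""
    def read(self):
        raise NotImplementedError

class PhysicalSonar(LogicalSensor):
    """Meets the informational need directly from a device reading."""
    def __init__(self, device):
        self.device = device          # callable returning a raw range
    def read(self):
        return self.device()

class NearestObstacle(LogicalSensor):
    """Meets an abstract need by combining lower-level logical sensors,
    mirroring the hierarchical application of Explication."""
    def __init__(self, sensors):
        self.sensors = sensors
    def read(self):
        return min(s.read() for s in self.sensors)
```

A consumer of `NearestObstacle` neither knows nor cares how many physical sonars back it, which is the source-independence the abstraction is meant to buy.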
One of the purposes of the framework is to organize and differentiate between the domain-dependent and domain-independent mechanisms and structures. This will allow us to implement a platform-independent testbed and to perform empirical studies on the use of knowledge during execution and the identification of relevant information and Utilization-Details.

II. IMPLEMENTATION OF LSAT

The LSAT framework is being implemented on a SUN SPARC 4/330 computer which interacts with a TRC Labmate mobile robot over two separate serial lines. The system is being written in C and in Lucid Common Lisp with the Common Lisp Object System (CLOS) and the LispView windowing system. An object-oriented approach has been used in the implementation, where an object corresponds to a Logical Sensor/Actuator (LSA) entity. The Labmate's mobility is controlled by setting registers for velocity, acceleration, turning radius, and operation modes. It has dead reckoning (wheel and steering counters), bumper sensors, and two active sensor systems. The first is a ring of 24 Polaroid acoustic sensors, encircling the robot, which have a range of six inches to thirty-five feet. The second is a set of infrared, single-beam proximity sensors which can detect the existence of (but not the range to) an object up to thirty inches away. These sensors are mounted on the corners of the robot facing forward, to the sides, and to the rear (8 sensors in all).

Within the LSAT framework, we have developed five classes of LSA which are used to implement the framework's objects. The first, SENSOR, takes raw/processed data as input and outputs data which has been further processed (i.e., sensor processing). The next class, DRIVER, accepts as input multiple commands for the actual hardware/drivers. This class acts as an interface to the hardware, performs command scheduling and minor command conflict resolution, and routes major conflicts to its controlling LSA.
Another class is the GENERATOR, which accepts sensor data as input and outputs a command meant for a DRIVER. This class can be viewed as a low-level, feedback-control looping mechanism between a sensor and the actuator. The MATCHER class is much like the SENSOR class in that it takes sensor data as input and processes it for output. The difference is that it also takes as input a description of a goal or error situation, and the processing consists of matching the goal/error against the input sensor data. The output is simply a measurement of the matching process. The last class is the CONTROLLER. This class accepts processed data from any other class, as well as commands and parameter-values from other CONTROLLERs higher up in the hierarchy. Its output is the control commands and parameter-values (referred to earlier as Utilization-Detail) for its sub-LSAs. CONTROLLERs are the main entity used to implement Explication: they control the other LSAs and manipulate them through the Explication process. The implementation of Explication has been defined as a set of functions which correspond to the subproblems of Explication described earlier. We have developed an algorithm which combines these functions into a complete process, and we are experimenting with it on the Labmate robot.
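The relationships among the five LSA classes can be summarized in a short sketch. The original system was implemented in C and Common Lisp/CLOS, so the Python below is only an illustration of the described roles, with assumed method names:

```python
# Illustrative sketch of the five LSA classes (SENSOR, DRIVER, GENERATOR,
# MATCHER, CONTROLLER). Hypothetical names; not the LSAT source code.

class LSA:
    """Base Logical Sensor/Actuator entity."""

class Sensor(LSA):
    """Takes raw/processed data and outputs further-processed data."""
    def __init__(self, process):
        self.process = process
    def output(self, data):
        return self.process(data)

class Driver(LSA):
    """Hardware interface: queues (schedules) commands before issuing them."""
    def __init__(self, hardware):
        self.hardware = hardware      # callable that accepts a command
        self.queue = []
    def submit(self, command):
        self.queue.append(command)
    def step(self):
        if self.queue:
            self.hardware(self.queue.pop(0))

class Generator(LSA):
    """Low-level feedback loop: sensor data in, DRIVER command out."""
    def __init__(self, control_law):
        self.control_law = control_law
    def command(self, sensor_data):
        return self.control_law(sensor_data)

class Matcher(LSA):
    """Scores how well sensor data matches a goal/error description."""
    def __init__(self, description):
        self.description = description
    def match(self, sensor_data):
        return self.description(sensor_data)   # a match measurement

class Controller(LSA):
    """Implements Explication: emits commands and parameter-values
    (Utilization-Detail) to its sub-LSAs."""
    def __init__(self, sub_lsas):
        self.sub_lsas = sub_lsas
    def explicate(self, abstract_goal):
        return [(lsa, abstract_goal) for lsa in self.sub_lsas]
```

In this reading, a GENERATOR closes the tight sensor-to-actuator loop while a CONTROLLER sits above it, deciding which sub-LSAs run and with what parameters, which matches the division of labor described in the text.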
Publication date: 1992